

Search for: All records

Creators/Authors contains: "Winecki, Dominik"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Video data can be slow to process due to the size of video streams and the computational complexity needed to decode, transform, and encode them. These challenges are particularly significant in interactive applications, such as quickly generating compilation videos from a user search. We look at optimizing access to source video segments in multimedia systems where multiple separately encoded copies of video sources are available, such as proxy/optimized media in conventional non-linear video editors or VOD streams in content distribution networks. Rather than selecting a single source to use (e.g., "use the lowest-bitrate 720p source"), we specify a minimum visual quality (e.g., "use any frames with VMAF ≥ 85"). This quality constraint and the needed segment bounds are used to find the lowest-latency operations to decode a segment from multiple available sources with diverse bitrates, resolutions, and codecs. This approach uses higher-quality, slower-to-decode sources if their encoding is better aligned with the specific segment bounds, which can provide faster access than using just one lower-quality source. We provide a general solution to this Quality-Aware Multi-Source Selection problem with optimal computational complexity. We create a dataset using adaptive-bitrate streaming Video on Demand sources from YouTube's CDN. We evaluate our algorithm on simple segment decoding as well as when embedded into a larger editing system: a declarative video editor. Our evaluation shows up to 23% lower-latency access, depending on segment length, at identical visual quality levels.
    Free, publicly-accessible full text available March 31, 2026
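The selection idea in the abstract above can be sketched in a few lines: given several sources with known keyframe positions, quality scores, and decode costs, decoding a segment must start at the keyframe at or before the segment start, so a higher-quality source whose keyframes happen to align with the segment can be cheaper than a nominally faster one. This is a minimal illustration, not the paper's algorithm; the `Source` fields, the single per-source VMAF score, and the linear per-frame decode cost are all simplifying assumptions.

```python
from bisect import bisect_right
from dataclasses import dataclass

@dataclass
class Source:
    name: str
    keyframes: list   # sorted frame indices of keyframes (assumed closed GOP)
    vmaf: float       # representative quality score for this source (assumption)
    decode_cost: float  # relative per-frame decode cost (assumption)

def segment_latency(src, start, end):
    # Decoding must begin at the last keyframe at or before `start`,
    # so frames between that keyframe and `start` are decoded and discarded.
    k = src.keyframes[bisect_right(src.keyframes, start) - 1]
    return (end - k + 1) * src.decode_cost

def pick_source(sources, start, end, min_vmaf):
    # Quality constraint first, then minimize estimated decode latency.
    eligible = [s for s in sources if s.vmaf >= min_vmaf]
    return min(eligible, key=lambda s: segment_latency(s, start, end))
```

For a segment starting right on a 1080p keyframe but far from any 720p keyframe, `pick_source` will choose the slower-to-decode 1080p source, matching the abstract's observation that better-aligned encodings can win despite higher per-frame cost.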
  2. Abstract We use a multilayer perceptron (MLP) neural network to obtain photometry of saturated stars in the All-Sky Automated Survey for Supernovae (ASAS-SN). The MLP can obtain fairly unbiased photometry for stars from g ≃ 4 to 14 mag, particularly compared to the dispersion (15%–85% 1σ range around the median) of 0.12 mag for saturated (g < 11.5 mag) stars. More importantly, the light curve of a nonvariable saturated star has a median dispersion of only 0.037 mag. The MLP light curves are, in many cases, spectacularly better than those provided by the standard ASAS-SN pipelines. While the network was trained on g-band data from only one of ASAS-SN's 20 cameras, initial experiments suggest that it can be used for any camera and the older ASAS-SN V-band data as well. The dominant problems seem to be associated with correctable issues in the ASAS-SN data reduction pipeline for saturated stars more than with the MLP itself. The method is publicly available as a light-curve option on ASAS-SN Sky Patrol v1.0.
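The core regression in the abstract above, an MLP mapping features of a saturated-star cutout to a magnitude, can be sketched as a plain forward pass. The shapes here (64 input features, 32 hidden units, one output magnitude) are illustrative assumptions, not the paper's architecture, and the random weights stand in for trained ones.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, W1, b1, W2, b2):
    # One hidden layer with ReLU, linear output predicting a magnitude.
    h = np.maximum(0.0, x @ W1 + b1)
    return h @ W2 + b2

# Hypothetical shapes: 64 features extracted from a saturated-star
# cutout, 32 hidden units, one output (the predicted g magnitude).
W1 = rng.normal(0.0, 0.1, (64, 32)); b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.1, (32, 1)); b2 = np.zeros(1)

mag = mlp_forward(rng.normal(size=(1, 64)), W1, b1, W2, b2)
```

In practice the weights would be fit against unsaturated reference photometry, which is what lets the network recover unbiased magnitudes for stars the standard pipeline clips.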
  3. Querying video data has become increasingly popular and useful. Video queries can be complex, ranging from retrieval tasks (“find me the top videos that have … ”), to analytics (“how many videos contained object X per day?”), to excerpting tasks (“highlight and zoom into scenes with object X near object Y”), or combinations thereof. Results for video queries are still typically shown as either relational data or a primitive collection of clickable thumbnails on a web page. Presenting query results in this form is an impedance mismatch with the video medium: they are cumbersome to skim through and are in a different modality and information density compared to the source data. We describe V2V, a system to efficiently synthesize video results for video queries. V2V returns a fully-edited video, allowing the user to consume results in the same manner as the source videos. A key challenge is that synthesizing video results from a collection of videos is computationally intensive, especially within interactive query response times. To address this, V2V features a grammar to express video transformations in a declarative manner and a heuristic optimizer that improves the efficiency of V2V processing in a manner similar to how databases execute relational queries. Experiments show that our V2V optimizer enables video synthesis to run 3x faster. 
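The database analogy in the abstract above, a declarative grammar plus an optimizer that rewrites plans before execution, can be illustrated with a toy plan tree. The operators (`Clip`, `Trim`, `Concat`) and the single rewrite rule are hypothetical, in the spirit of predicate pushdown rather than V2V's actual grammar.

```python
from dataclasses import dataclass

@dataclass
class Clip:
    src: str
    start: float
    end: float

@dataclass
class Trim:
    child: object
    start: float
    end: float

@dataclass
class Concat:
    children: list

def optimize(plan):
    # Rewrite rule: push a Trim into its Clip so frames that would be
    # discarded are never decoded (analogous to predicate pushdown).
    if isinstance(plan, Trim) and isinstance(plan.child, Clip):
        c = plan.child
        return Clip(c.src, c.start + plan.start, min(c.end, c.start + plan.end))
    if isinstance(plan, Concat):
        return Concat([optimize(ch) for ch in plan.children])
    return plan
```

Because the plan is data rather than eagerly executed edits, the optimizer can reorder and fuse operations across an entire compilation video before any frames are touched, which is where the reported speedups come from.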
  4. The effective reporting of climate hazards, such as flash floods, hurricanes, and earthquakes, is critical. To quickly and correctly assess the situation and deploy resources, emergency services often rely on citizen reports that must be timely, comprehensive, and accurate. The pervasive availability and use of smartphone cameras allow the transmission of dynamic incident information from citizens in near-real-time. While high-quality reporting is beneficial, generating such reports can place an additional burden on citizens who are already suffering from the stress of a climate-related disaster. Furthermore, reporting methods are often challenging to use, due to their length and complexity. In this paper, we explore reducing the friction of climate hazard reporting by automating parts of the form-filling process. By building on existing computer vision and natural language models, we demonstrate the automated generation of a full-form hazard impact assessment report from a single photograph. Our proposed data pipeline can be integrated with existing systems and used with geospatial data solutions, such as flood hazard maps.
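The form-filling stage in the abstract above can be sketched as a function that turns a model-generated caption into structured report fields. Everything here is hypothetical: a real pipeline would call an image-captioning model and a language model where this sketch uses a stubbed caption string and simple keyword rules.

```python
# Hypothetical mapping from caption keywords to hazard categories;
# a deployed system would use a language model rather than rules.
HAZARD_KEYWORDS = {
    "flood": "flash flood",
    "water": "flash flood",
    "collapsed": "earthquake",
    "wind": "hurricane",
}

def fill_report(caption, location=None):
    # Populate a minimal hazard-report form from a photo caption.
    fields = {
        "hazard_type": "unknown",
        "description": caption,
        "location": location,
    }
    for word, hazard in HAZARD_KEYWORDS.items():
        if word in caption.lower():
            fields["hazard_type"] = hazard
            break
    return fields
```

Keeping the output as plain form fields is what lets such a pipeline plug into existing reporting systems and geospatial layers like flood hazard maps, as the abstract notes.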